Introduce Thread-local storage variable to replace atomic when update used_memory metrics to reduce contention. #308
base: unstable
Conversation
…ontention. Signed-off-by: Lipeng Zhu <lipeng.zhu@intel.com>
What are the server configurations you are using for the test? I didn't see them listed. Can you also just do a simple test with get/set?
 * used_memory_tls array. */
static __thread int thread_index;
/* MAX_THREADS_NUM = IO_THREADS_MAX_NUM(128) + BIO threads(3) + main thread(1). */
#define MAX_THREADS_NUM 132
Suggested change:
-#define MAX_THREADS_NUM 132
+#define MAX_THREADS_NUM (IO_THREADS_MAX_NUM + BIO_THREAD_COUNT + 1)
Can we do something like this? I think we need to implement a new define for BIO thread count. Then we don't have to worry about this changing.
IO_THREADS_MAX_NUM is defined in networking.c, and BIO_THREAD_COUNT has a similar define, BIO_WORKER_NUM, in bio.c. How do we reuse those defines in zmalloc.c? It is a very low-level API.
Yeah, there is a layering concern here. Can you add a c_assert to make sure the condition (IO_THREADS_MAX_NUM + BIO_THREAD_COUNT + 1) <= MAX_THREADS_NUM is not violated?
btw, if we go with this route, we should probably introduce a new header to host all thread-local-storage related definitions. It feels a bit out of place to define MAX_THREADS_NUM in zmalloc.c, and we might have other consumers of this macro in the future. I wonder if we should call the new header system.h, which would be at the lowest layer.
src/zmalloc.c
Outdated
atomicGet(used_memory,um);
size_t um = 0;
for (int i = 0; i < total_active_threads; i++) {
    um += used_memory_tls[i];
Suggested change:
-um += used_memory_tls[i];
+serverAssert(i < MAX_THREADS_NUM);
+um += used_memory_tls[i];
src/zmalloc.c
Outdated
/* Register the thread index in start_routine. */
void zmalloc_register_thread_index(void) {
    atomicGetIncr(total_active_threads, thread_index, 1);
Suggested change:
-atomicGetIncr(total_active_threads, thread_index, 1);
+serverAssert(total_active_threads < MAX_THREADS_NUM);
+atomicGetIncr(total_active_threads, thread_index, 1);
Since both total_active_threads and thread_index are of size_t, whose size matches the native word size of the platform, the proposed change should work. Strictly speaking, though (if size_t were 64-bit on a 32-bit platform), L112 and L113 would need to be flipped. We also shouldn't touch the global var total_active_threads without the atomic operation; instead, we should access the thread-local var thread_index.
Suggested change:
-atomicGetIncr(total_active_threads, thread_index, 1);
+atomicGetIncr(total_active_threads, thread_index, 1);
+serverAssert(thread_index < MAX_THREADS_NUM);
Just updated the server config and SET/GET results in the top comment.
Signed-off-by: Lipeng Zhu <lipeng.zhu@intel.com>
atomicGet(used_memory,um);
assert(total_active_threads < MAX_THREADS_NUM);
size_t um = 0;
for (int i = 0; i < total_active_threads; i++) {
Wondering how important it is for zmalloc_used_memory to return the correct used_memory? As we loop through to add up the memory used by each thread, the threads that have already been counted may continue to allocate or free memory, so the sum might under- or over-account the actual memory used by all threads.
I don't think this is an issue if the function is only used for metrics, but overMaxmemoryAfterAlloc (evict.c) is not for metrics; we should probably ensure (and call out) that undercounting memory usage would not be an issue there.
@qlong I understand your concern. At the beginning, I wanted to keep an approximate value for used_memory: each thread keeps its own used_memory_thread, and only when used_memory_thread grows beyond a threshold (e.g. 1k) does it update the global atomic used_memory; the pseudocode would look like this. But I saw the test case at https://github.com/valkey-io/valkey/blob/unstable/src/zmalloc.c#L917, which seems to need an accurate used_memory. Back to this PR: each element in the used_memory_thread array has only one writer updating its value; on a read, the reader may get either the oldvalue or the newvalue, depending on the read timing.
@madolson What's your thought?
-#define update_zmalloc_stat_alloc(__n) atomicIncr(used_memory,(__n))
-#define update_zmalloc_stat_free(__n) atomicDecr(used_memory,(__n))
+#define update_zmalloc_stat_alloc(__n) do { \
+ used_memory_thread += (__n); \
+ if (used_memory_thread > 1024) atomicIncr(used_memory, used_memory_thread); \
+} while (0) \
+
+#define update_zmalloc_stat_free(__n) do { \
+ used_memory_thread -= (__n); \
+ if (used_memory_thread > 1024) atomicDecr(used_memory, used_memory_thread); \
+} while (0) \
static serverAtomic size_t used_memory = 0;
+static __thread size_t used_memory_thread = 0;
a few quick thoughts

- I see value in removing atomic operations.
- Since we use size_t here for the counters, which should be natively aligned as well, the worst we can get would be staleness as opposed to inconsistency.
- The absolute accuracy doesn't have much value for metrics. The value will get stale immediately after the INFO command returns, whether it is absolutely accurate or not. As long as the value is close enough I think we are good. We can change the tests.
- Agreed that overMaxmemoryAfterAlloc is a higher bar than metrics. My gut feeling is that the staleness brought by the relaxed memory access should still be negligible. I doubt we could accumulate a lot of changes before the CPU flushes out its cache to the main memory. Curious to know if anyone has more definitive information.
- The absolute accuracy requirement is directly at odds with the performance overhead incurred by the atomic operation. So one of them has to go.
It seems we all had concerns about the absolute accuracy of used_memory in overMaxmemoryAfterAlloc. IMO, this is not a problem, because in the definition of overMaxmemoryAfterAlloc, the comparison between moremem and used_memory (their semantics are different) is not absolutely accurate anyway, if we consider the gap between the requested memory and the memory actually allocated by the allocator.
int overMaxmemoryAfterAlloc(size_t moremem) {
if (!server.maxmemory) return 0; /* No limit. */
/* Check quickly. */
size_t mem_used = zmalloc_used_memory();
if (mem_used + moremem <= server.maxmemory) return 0;
size_t overhead = freeMemoryGetNotCountedMemory();
mem_used = (mem_used > overhead) ? mem_used - overhead : 0;
return mem_used + moremem > server.maxmemory;
}
Kindly ping @PingXie. Any comments about this? Do we need to continue with this approach?
I think there is a way to address this concern while retaining most of the performance benefit, if we keep the existing io-threading model. The idea would be to keep a mirrored per-thread counter array such that each thread periodically "flushes" its thread-local counter value to the corresponding element in the global per-thread counter array using atomic operations. This would work with the current io-threading model, where io-threads and the main thread run in lock-step. Every io thread would need to push its local value to this global array when it is done with its work, but only once. Then by the time the main thread starts executing, it would have accurate data. We can reduce the number of atomic operations to one per io-thread per epoll session.
We would have to revisit this if we went for a different threading model (such as #22).
That said, I am still not sure how accurate this information needs to be. zmalloc_used_memory doesn't represent the RSS usage of the process anyway. I think it would be really helpful if we could somehow establish the worst possible case, empirically or even as a SWAG.
> That said, I am still not sure how accurate this information needs to be. zmalloc_used_memory doesn't represent the RSS usage of the process anyways.
IMO, there is no meaningful benefit to an absolutely accurate zmalloc_used_memory, and the cost of achieving it is very high: not only the expensive atomic operation in every zmalloc/zfree call, but also the cost of calling je_malloc_usable_size (from the cycles hotspot snapshot in the top comment, you can see that je_malloc_usable_size is the top-1 hotspot). If we can accept a slightly stale zmalloc_used_memory, ideally we may get at least a 10% perf gain.
@PingXie Do you think we need to open an issue to discuss with the rest of @valkey-io/core-team the necessity of an absolutely accurate zmalloc_used_memory?
src/zmalloc.c
Outdated
@@ -87,10 +88,23 @@ void zlibc_free(void *ptr) {
 #define dallocx(ptr,flags) je_dallocx(ptr,flags)
 #endif

-#define update_zmalloc_stat_alloc(__n) atomicIncr(used_memory,(__n))
-#define update_zmalloc_stat_free(__n) atomicDecr(used_memory,(__n))
+#define update_zmalloc_stat_alloc(__n) used_memory_thread[thread_index] += (__n)
thread_index is thread-local, but used_memory_thread[thread_index] is not. Without some sort of locking, a thread might read a strange value from used_memory_thread[thread_index] if another thread is updating it.
What do you mean by "strange value"? The reader can only read the old_value or the new_value in this case.
@lipzhu I meant a corrupted value due to a race condition among threads. I did not see (maybe I have missed it) how we guarantee the reader reads either the old or the new value. The reader can see a corrupted value of used_memory_thread[i] if the writer is updating used_memory_thread[i] at the same time.
Yes, the semantics are a little different from the original implementation, but do we really need such an accurate used_memory? The cost is adding the expensive lock add/sub instructions to low-level APIs like the zmalloc/zfree functions, and most calls to these functions come from the main thread. And the new_value will be seen on the next read.
@lipzhu I think data corruption caused by a race condition is not about accuracy; the corrupted value can be anything from 0 to 2^64-1. This can lead to correctness or stability issues.
The original implementation uses an atomic number, which leverages CPU support and is quite efficient as a counter that is updated frequently from different threads.
@valkey-io/core-team, could you help take a look at this patch?
I did not take a deep look; how do we handle module threads?
Hi @enjoy-binbin, thanks for your comments on this patch. BTW, your blog on yuque helped me a lot in understanding redis/valkey.
Sorry, I missed the module threads; maybe I can remove the explicit call of the
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@             Coverage Diff              @@
##           unstable      #308     +/-  ##
============================================
- Coverage     70.23%    70.21%    -0.02%
============================================
  Files           109       109
  Lines         59955     59960        +5
============================================
- Hits          42110     42103        -7
- Misses        17845     17857       +12
Signed-off-by: Lipeng Zhu <lipeng.zhu@intel.com>
@enjoy-binbin @PingXie @madolson Thoughts about this patch?
Kindly ping @valkey-io/core-team.
Description

This patch tries to introduce a thread-local storage variable to replace the atomic for zmalloc, to reduce unnecessary contention.

Problem Statement

The zmalloc and zfree related functions update used_memory on every operation, and they are called very frequently. From the benchmark memtier_benchmark-1Mkeys-load-stream-5-fields-with-100B-values-pipeline-10.yml, the cycles ratio of zmalloc and zfree is high; they are thin wrappers over the underlying allocator library and should not take that many cycles. Most of the cycles are contributed by lock add and lock sub, which are expensive instructions. From the profiling, the metrics updates mainly come from the main thread, so using TLS reduces a lot of contention.

Performance Impact
Test Env
Start Server
taskset -c 0 ~/valkey/src/valkey-server /tmp/valkey_1.conf
By using the benchmark memtier_benchmark-1Mkeys-load-stream-5-fields-with-100B-values-pipeline-10.yml.
memtier_benchmark -s 127.0.0.1 -p 9001 "--pipeline" "10" "--data-size" "100" --command "XADD __key__ MAXLEN ~ 1 * field __data__" --command-key-pattern="P" --key-minimum=1 --key-maximum 1000000 --test-time 180 -c 50 -t 4 --hide-histogram
We can observe more than 6% QPS gain.
For the other benchmarks SET/GET, using the commands like:
taskset -c 6-9 ~/valkey/src/valkey-benchmark -p 9001 -t set,get -d 100 -r 1000000 -n 1000000 -c 50 --threads 4
No perf gain or regression was observed.
With pipelining enabled, I can observe a 4% perf gain with the following test case:
taskset -c 4-7 memtier_benchmark -s 127.0.0.1 -p 9001 "--pipeline" "10" "--data-size" "100" --ratio 1:0 --key-pattern P:P --key-minimum=1 --key-maximum 1000000 --test-time 180 -c 50 -t 4 --hide-histogram